Stability of Learning in Classes of Recurrent and Feedforward Networks
Author
William H. Wilson ([email protected])
School of Computer Science and Engineering, University of New South Wales, Sydney 2052, Australia

Abstract
The relationship to work by Mozer [5] on induction of temporal structure is briefly described in [13]. In the research reported here, the task is similar to Elman's: predicting the next letter in a word (or the end of the word) from the current letter and the representation of past letters held in the state vector. While the original motivation for this task was linguistic [12], the current paper focuses on the efficacy of a range of network architectures and learning regimes applied to the task.
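The next-letter prediction task described in the abstract can be illustrated with a minimal Elman-style simple recurrent network (SRN). The sketch below is an illustrative assumption, not the paper's own code: all names, sizes, and the untrained random weights are placeholders, and the hidden vector `h` plays the role of the "state vector" holding the representation of past letters.

```python
import numpy as np

# Minimal sketch of an Elman-style SRN for next-letter prediction.
# All names, layer sizes, and weights are illustrative assumptions.

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz#"   # '#' marks end-of-word
V = len(ALPHABET)                          # input/output size
H = 10                                     # size of the state vector

W_xh = rng.normal(0, 0.1, (H, V))          # input -> hidden weights
W_hh = rng.normal(0, 0.1, (H, H))          # context (state) -> hidden weights
W_hy = rng.normal(0, 0.1, (V, H))          # hidden -> output weights

def one_hot(ch):
    v = np.zeros(V)
    v[ALPHABET.index(ch)] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_word(word):
    """Step through a word, predicting the next letter at each position.

    The hidden state h carries the representation of past letters
    (Elman's context layer) referred to in the abstract.
    """
    h = np.zeros(H)                        # initial state vector
    preds = []
    for ch in word:
        h = np.tanh(W_xh @ one_hot(ch) + W_hh @ h)
        y = softmax(W_hy @ h)
        preds.append(ALPHABET[int(y.argmax())])
    return preds

preds = predict_word("cat#")
print(preds)   # untrained network, so the predictions are arbitrary
```

Training (e.g. by backpropagation through time, or the truncated variant Elman used) would adjust the three weight matrices so that the output distribution concentrates on the letters that can legally follow the prefix seen so far.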
Similar Papers
Learning Performance of Networks like Elman's Simple Recurrent Networks but having Multiple State Vectors
Target Papers:
• William H. Wilson, A comparison of architectural alternatives for recurrent networks, Proceedings of the Fourth Australian Conference on Neural Networks, ACNN'93, Melbourne, 13 February 1993, 189-192. ftp://ftp.cse.unsw.edu.au/pub/users/billw/wilson.recurrent.ps.Z
• William H. Wilson, Stability of learning in classes of recurrent and feedforward networks, in Proceedings of the ...
Robust stability of stochastic fuzzy impulsive recurrent neural networks with time-varying delays
In this paper, global robust stability of stochastic impulsive recurrent neural networks with time-varying delays which are represented by the Takagi-Sugeno (T-S) fuzzy models is considered. A novel Linear Matrix Inequality (LMI)-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of uncertain fuzzy stochastic impulsive recurrent neural...
Design of Self-Constructing Recurrent-Neural-Network-Based Adaptive Control
Recently, neural-network-based adaptive control techniques have attracted increasing attention, because they provide an efficient and effective way to control complex nonlinear or ill-defined systems (Duarte-Mermoud et al., 2005; Hsu et al., 2006; Lin and Hsu, 2003; Lin et al., 1999; Peng et al., 2004). The key elements of this success are the approximation capabilities of the neural ne...
Phase-Space Learning for Recurrent Networks
We study the problem of learning nonstatic attractors in recurrent networks. With concepts from dynamical systems theory, we show that this problem can be reduced to three sub-problems: (a) embedding the temporal trajectory in phase space, (b) approximating the local vector field, and (c) function approximation using feedforward networks. This general framework overcomes problems with tra...
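The three sub-problems named in that abstract can be sketched concretely. The example below is an illustrative assumption, not that paper's method: it embeds a scalar trajectory with delay coordinates, then approximates the one-step map with a fixed random hidden layer and a linear least-squares readout standing in for a trained feedforward network.

```python
import numpy as np

# Hedged sketch of: (a) delay-coordinate embedding of a temporal trajectory
# in phase space, and (b)+(c) approximating the local vector field with a
# feedforward map. The random-hidden-layer + least-squares readout is an
# illustrative stand-in for a trained feedforward network.

rng = np.random.default_rng(1)

# A scalar temporal trajectory (a sine wave, standing in for an attractor).
t = np.linspace(0, 8 * np.pi, 400)
s = np.sin(t)

# (a) Embed in phase space with delay coordinates: x_k = (s_k, s_{k-d}).
d = 5
X = np.stack([s[d:-1], s[:-d - 1]], axis=1)   # embedded states
Y = s[d + 1:]                                  # next value of the series

# (b)+(c) Feedforward approximation of the map x_k -> s_{k+1}:
H = np.tanh(X @ rng.normal(0, 1, (2, 30)))     # fixed random hidden layer
W, *_ = np.linalg.lstsq(H, Y, rcond=None)      # linear least-squares readout

pred = H @ W
err = float(np.max(np.abs(pred - Y)))
print(f"max one-step prediction error: {err:.4f}")
```

Once the one-step map is learned in phase space, iterating it from an initial embedded state reproduces the trajectory, which is the sense in which the attractor-learning problem reduces to feedforward function approximation.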
Recurrent Neural-Network Learning of Phonological Regularities in Turkish (1997, Association for Computational Linguistics)
Simple recurrent networks were trained with sequences of phonemes from a corpus of Turkish words. The network's task was to predict the next phoneme. The aim of the study was to look at the representations developed within the hidden layer of the network in order to investigate the extent to which such networks can learn phonological regularities from such input. It was found that in the differe...